# High-precision reasoning
Andrewzh Absolute Zero Reasoner Coder 14b GGUF
Based on andrewzh's Absolute_Zero_Reasoner-Coder-14b, this is an imatrix quantization produced with llama.cpp, suitable for reasoning and code-generation tasks.
Large Language Model
bartowski
1,995
5
Servicenow AI Apriel Nemotron 15b Thinker GGUF
MIT
This is a 15B-parameter large language model built by the ServiceNow Language Model (SLAM) lab, quantized with llama.cpp and suitable for local inference deployment.
Large Language Model
bartowski
3,707
11
Qwen3 4B Mishima Imatrix GGUF
Apache-2.0
A Mishima imatrix quantization of Qwen3-4B, enhanced with specific datasets for prose-style generation.
Large Language Model
DavidAU
105
2
Phi 4 GGUF
MIT
phi-4 is an open-source language model developed by Microsoft Research, focused on high-quality data and reasoning capabilities and suitable for memory- and compute-constrained environments.
Large Language Model
Supports Multiple Languages
Mungert
1,508
3
ByteDance Research UI-TARS 72B SFT GGUF
A 72B-parameter multimodal foundation model released by ByteDance Research, specializing in image-text-to-text tasks
Image-to-Text
DevQuasar
81
1
Glm 4 9b Hf
Other
GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI. It performs excellently on evaluations of semantics, mathematics, reasoning, code, and knowledge, and offers advanced features such as multilingual support.
Large Language Model
Safetensors Supports Multiple Languages
THUDM
1,799
7
Higgs Llama 3 70B
Other
A model post-trained from Meta-Llama-3-70B, specifically optimized for role-playing tasks while also excelling at general-domain instruction following and reasoning.
Large Language Model
Transformers

bosonai
220
220
YiSM 34B 0rn
Apache-2.0
YiSM-34B-0rn is a large language model based on the fusion of Yi-1.5-34B and Yi-1.5-34B-Chat, designed to balance instruction-following capabilities with foundational model characteristics.
Large Language Model
Transformers

altomek
22
2
Internlm2 20b
Other
InternLM2-20B is a second-generation InternLM model with 20B parameters, supporting an ultra-long context of up to 200k and excelling in reasoning, mathematics, and programming.
Large Language Model
Transformers

internlm
6,926
55
Chupacabra 7B V2
Apache-2.0
A 7B-parameter large language model based on the Mistral architecture, using SLERP merging to combine weights from multiple high-performing models.
Large Language Model
Transformers

perlthoughts
99
35